CSIRO’s ON Innovation Program will host an interactive panel and hands-on workshop showing researchers how to move from “grant-speak” to partnership-ready impact stories that resonate with industry, government and investors.
The session will unpack the core ingredients for research commercialisation success - clarity, customers, community and impact - and show how these translate into compelling value propositions for non-academic partners.
The panel will explore real-world examples of ON teams who have used customer discovery to identify industry needs, navigate institutional pathways and build collaborations that accelerate translation, from licensing and joint ventures to startups and policy influence.
In the workshop, participants will use practical ON tools (including an Impact & Commercialisation Pathway Canvas and 30/60/90-day impact roadmaps) to reframe their own projects around problems, users and value, then craft concrete next steps towards an industry conversation.
Attendees will leave with a draft impact narrative, clearer language for describing benefits to partners, and a simple action plan to progress at least one potential collaboration opportunity.
This interactive workshop explores how structured dialogue with AI can support research thinking, idea generation, and reflective learning.
Using examples from an experimental e-book project titled 'Idea Builder: A Billion-Dollar Brain', the session demonstrates how conversations between a human and AI can function as a cognitive workflow for developing research questions, clarifying concepts, and generating new ideas.
Participants will learn a simple Human-AI dialogue framework that transforms curiosity into researchable questions. Through guided exercises, attendees will experiment with AI-assisted thinking to explore their own research interests.
The workshop also introduces a broader philosophical perspective on learning beyond traditional classroom structures and discusses the emerging role of AI as a thinking partner in academic knowledge creation.
This interactive session introduces researchers to practical uses of large language models (LLMs) across the research lifecycle. It combines conceptual understanding with hands-on demonstrations of real workflows.
Participants will:
The session is designed to be accessible across disciplines, with examples from scientific and data-driven research.
Passive Acoustic Monitoring (PAM) is revolutionising how we study biodiversity, allowing us to listen to ecosystems at scales previously impossible. However, the sheer volume of audio data can be overwhelming. This hands-on workshop introduces Open Ecoacoustics, Australia's national infrastructure for managing and analysing environmental sound.
We will navigate the transition from deploying a sensor in the field to generating actionable conservation insights. Participants will receive a guided tour of Ecosounds and the A20 desktop tool, learning how to securely upload massive datasets and leverage cutting-edge AI. We'll explore how AI foundation models like Google's Perch and BirdNET allow researchers to build species recognisers that scan thousands of hours of audio in minutes. Finally, we'll discuss how to translate these acoustic detections into robust reports and downstream spatial models to inform policy and habitat management.
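The core move inside any recogniser is the same: slide a window over the signal and score each window. Here is a toy sketch of that idea in Python, using a made-up synthetic waveform and a simple energy score (the real tools replace this score with a deep-learning model):

```python
# Toy acoustic event detector: slide a fixed window over a waveform and
# flag windows whose mean energy exceeds a threshold. Real recognisers
# (Perch, BirdNET) replace the energy score with a neural network.
# The waveform below is synthetic: 50 quiet samples, a short burst, then quiet.

signal = [0.0] * 50 + [0.9, -0.8, 0.7, -0.9, 0.8] + [0.0] * 45

def detect(signal, window=10, threshold=0.1):
    hits = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        energy = sum(x * x for x in chunk) / window  # mean squared amplitude
        if energy > threshold:
            hits.append(start)
    return hits

print(detect(signal))  # [50] - the burst begins at sample 50
```

Scanning "thousands of hours in minutes" is this loop, vectorised and run over batches of spectrogram windows instead of raw samples.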
Successful industry‑engaged research depends on more than technical expertise — it requires strong relational capability. This interactive workshop explores the relationship competencies researchers need to build, sustain and leverage effective industry partnerships.
Participants will examine practical strategies for trust‑building, boundary spanning, stakeholder intelligence, negotiation and co‑creation. Through applied exercises and real‑world scenarios, attendees will learn how to navigate differing institutional cultures, align expectations, manage risk and design mutually beneficial collaborations.
The session is relevant to HDR candidates, early career researchers and established academics seeking to strengthen the impact and translational potential of their research through industry collaboration.
What's the difference between important and interesting? How can I prove the impact of my project afterward? What's the direct value of this research?
Whether you're trying to raise a new project or focus an existing one, the humble spreadsheet can be a persuasive guide. It's also surprisingly easy to impress people with, since it's a convenient wrapper around some high-school algebra.
Post-PhD, I've been working to help get research out of the lab, and into the ocean. I'll discuss how technically-simple spreadsheets have directed research investment, demonstrated impacts at different scales, and reminded us of what matters once research leaves our hands. I'll also discuss the limits of the approach.
We'll also build a what-if model to demonstrate the process in action, and show how the results can highlight the scope for new work ... and what will be difficult to justify.
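The kind of what-if model we'll build is genuinely just high-school algebra. A minimal sketch in Python, with every figure invented for illustration (the session itself uses a spreadsheet, not code):

```python
# Toy what-if model: estimate cost per surviving planted coral, then vary
# one assumption to see how much the headline number moves.
# All figures are invented for illustration.

def cost_per_survivor(units_planted, cost_per_unit, survival_rate):
    """Total spend divided by expected survivors."""
    total_cost = units_planted * cost_per_unit
    survivors = units_planted * survival_rate
    return total_cost / survivors

baseline = cost_per_survivor(units_planted=10_000, cost_per_unit=25.0, survival_rate=0.6)
optimistic = cost_per_survivor(units_planted=10_000, cost_per_unit=25.0, survival_rate=0.8)

print(f"baseline:   ${baseline:.2f} per survivor")    # $41.67
print(f"optimistic: ${optimistic:.2f} per survivor")  # $31.25
```

The persuasive part is rarely the arithmetic; it's watching which assumptions the conclusion is sensitive to.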
The famous statistician John Tukey once said: 'The greatest value of a picture is when it forces us to notice what we never expected to see.' As researchers, identifying this 'unseen' in your data is becoming crucial to publishing your next journal paper. While publishing papers is essential, it’s just a part of research communication; to maximise the impact of your work, it’s vital that you make your research findings accessible and engaging to a wide audience. In this workshop, we will discuss the best methods to communicate the story that your data tells.
Explore methods and tools to create publication-quality visualisations of structured, tabular data. Tools covered include MS Excel and the open-source tool RAWGraphs.
On completion of this hands-on workshop, participants will be familiar with:
Qualitative research projects sometimes get away from us and we find that we have more material than we know what to do with. This workshop aims to give you a pathway for navigating 'too much' with a little bit of computational assistance and *without* giving up on the core qualitative foundation of your work.
We'll work through a case study and some demonstration tools to help you:
Don't let limited computing power slow down your research. The ARDC Nectar Research Cloud provides fast and scalable computing resources tailored specifically for research.
Whether you need to run intensive data analyses and complex simulations, train AI and ML models, manage big data or collaborate seamlessly across institutions, Nectar gives you the computational power to scale up your work.
Join us for an interactive introduction to Nectar, where we'll cover:
No cloud computing or coding experience is required to attend.
Data scientists rely on Python's numpy, scipy, pandas and matplotlib libraries for most of their analysis. Why is this? Do you know the Python code these packages use?
In this workshop we'll dive into the Python code behind common data science tools, focusing on features often missed during data-oriented tutorials, to bridge the gap between Python for programmers and Python for data scientists.
We'll look at questions like:
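As a taste of the kind of feature we'll unpack: element-wise arithmetic in numpy and pandas is built on Python's "dunder" protocol, in which the libraries override special methods like `__add__` and `__getitem__`. A stripped-down illustration of the same idea (a hypothetical `Vector` class, not real numpy internals):

```python
# NumPy arrays and pandas Series get their element-wise arithmetic and
# indexing by overriding Python's special ("dunder") methods.
# This toy Vector shows the mechanism without any of the real machinery.

class Vector:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        # Element-wise addition, like numpy arrays (no broadcasting here).
        return Vector(a + b for a, b in zip(self.data, other.data))

    def __getitem__(self, index):
        # Supports v[0] and slices like v[1:3].
        return self.data[index]

v = Vector([1, 2, 3]) + Vector([10, 20, 30])
print(v.data)  # [11, 22, 33]
```

Once you recognise that `df["col"]` and `a + b` are ordinary method calls under the hood, library documentation and error messages become much easier to read.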
This hands-on workshop introduces dyadic data analysis using R, focusing on data collected from interdependent pairs (e.g., partners, students–teachers, supervisors–employees). Participants will learn the logic of dyadic data, key assumptions, and how to apply appropriate statistical models in R through practical examples.
By the end of the workshop, participants will be able to identify dyadic data structures and conduct a basic dyadic data analysis using R.
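The central data manoeuvre in dyadic analysis is restructuring: one row per dyad (holding both members' scores) becomes one row per individual, with the partner's score carried along. The workshop does this in R; a sketch of the same reshape in Python, with invented scores:

```python
# Dyadic "pairwise" restructure: each dyad contributes two rows, one per
# member, and each row records both the actor's and the partner's score.
# Scores are invented for illustration.

dyads = [
    {"dyad_id": 1, "score_a": 4.0, "score_b": 2.5},
    {"dyad_id": 2, "score_a": 3.0, "score_b": 3.5},
]

pairwise = []
for d in dyads:
    pairwise.append({"dyad_id": d["dyad_id"], "actor": d["score_a"], "partner": d["score_b"]})
    pairwise.append({"dyad_id": d["dyad_id"], "actor": d["score_b"], "partner": d["score_a"]})

print(len(pairwise))  # 4 - two rows per dyad
```

This pairwise layout is what lets a model estimate actor and partner effects while respecting the non-independence within each dyad.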
Do you have more text than you know what to do with? Did you collect data including text for your project and now feel overwhelmed when you try to analyse? Is there too much? Are you doing the same thing over and over again and feeling like you're not using your time efficiently? Are you worried about missing the forest for the trees (or the trees for the forest)? If any of these apply to you (or you're just interested in learning more) this workshop is for you.
This workshop will introduce the fundamentals of computational text analysis using LADAL. We'll start with the key questions of why and where computational methods might be appropriate for your work before demonstrating a few key computational methods that are relevant for many researchers.
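To give a flavour of the simplest computational method, keyword frequency, here is a sketch in Python (LADAL's own tutorials use R, but the idea is identical; the text and stopword list below are made up for illustration):

```python
# The most basic computational text analysis: count word frequencies to
# surface what a text dwells on before you read it closely.
from collections import Counter
import re

text = """Qualitative data can be rich and repetitive. Counting the
repetitive parts frees you to read the rich parts closely."""

words = re.findall(r"[a-z']+", text.lower())
stopwords = {"the", "and", "to", "can", "be", "you"}  # tiny illustrative list
freq = Counter(w for w in words if w not in stopwords)

print(freq.most_common(3))
```

Even this crude count scales to millions of words, which is exactly where close reading alone stops being feasible.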
Want to build an AI assistant that guides learning rather than just handing out answers? This 1-hour demo walks through deploying a secure, Retrieval-Augmented Generation (RAG) chatbot using Dify on the ARDC Nectar Research Cloud.
Using an International Data Spaces (IDSA) learning mentor as our practical use case, we will demystify the end-to-end pipeline:
Finally, we will explore registering any future RAG training chatbot you develop on DReSA (dresa.org.au), Australia’s national registry for training events and materials, to ensure your tool reaches a nationwide audience.
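The retrieval step at the heart of RAG can be illustrated with a toy scorer. A real Dify deployment uses vector embeddings over a document store; this sketch substitutes simple word overlap and invented documents, but the shape of the pipeline - score, retrieve, ground the prompt - is the same:

```python
# Toy retrieval-augmented generation: score documents against a query,
# retrieve the best match, and build a grounded prompt for the LLM.
# Real RAG stacks use vector embeddings; word overlap stands in here.

docs = {
    "doc1": "Data spaces let organisations share data under agreed rules.",
    "doc2": "Nectar is a research cloud providing virtual machines.",
}

def score(query, text):
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t)  # number of shared words

def retrieve(query):
    return max(docs, key=lambda d: score(query, docs[d]))

best = retrieve("how do organisations share data")
prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: ..."
print(best)  # doc1
```

Grounding the prompt in retrieved context is what lets the chatbot mentor guide learners from course material rather than from the model's general training data.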
Artificial Intelligence is rapidly reshaping research practice, yet many researchers remain unsure where to begin, how to use AI responsibly, or how to meaningfully integrate it into their existing workflows.
This presentation introduces practical, research-focused uses of generative AI tools that can support HDR students, early career researchers, and supervisors across disciplines. Participants will explore how GenAI can assist with literature review, research planning, writing support, coding, data analysis, and research communication with a strong emphasis on ethical and transparent use.
The session will also showcase the eGrad School platform, which offers open-access digital modules designed to strengthen researchers’ AI literacy, digital capability, and professional skills. Together, these tools and resources aim to empower researchers to work smarter, more efficiently, and with integrity in an evolving digital research landscape.
This session explores how cinematic thinking and real-time visualisation practices are applied in contemporary creative industries to communicate ideas, narratives, and complex information. Drawing from professional experience in virtual production and real-time cinematic workflows, the session provides insight into how visual storytelling is designed, constructed, and communicated in practice.
The session presents industry case studies and visual breakdowns of real-world projects to demonstrate how cinematic language is translated into real-time environments. Key areas of focus include framing, lighting, camera movement, composition, and narrative design within immersive and screen-based contexts.
Participants will also engage in guided discussion activities that explore how these approaches can inform research communication and interdisciplinary creative practice. The session is designed to be accessible to a broad audience and does not require prior technical experience.
Python can be surprisingly difficult to set up well, partly because there are so many options. We'll cover some of the differences, pros and cons, and have some time to troubleshoot issues that you might be encountering.
We'll cover questions like,
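One of those questions - "which Python am I actually running?" - can be answered from inside Python itself, which is the first diagnostic when a setup misbehaves:

```python
# When a package "isn't installed" despite installing it, the usual cause
# is that pip and your interpreter point at different environments.
# These attributes tell you exactly which interpreter you're in.
import sys

print(sys.executable)          # path to the running interpreter
print(sys.version.split()[0])  # e.g. '3.12.1'
print(sys.prefix)              # root of the active environment
print(sys.prefix != sys.base_prefix)  # True inside a virtual environment
```

Running this from your editor, your terminal, and your notebook kernel will often reveal three different answers, which explains most "it works here but not there" mysteries.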
In this session, the participants will learn about a new ethical framework to collect data from social media platforms (Instagram, YouTube, ChatGPT) using data download packages (DDPs) for social science research. I will introduce:
By the end of the session, the participants will be able to:
This workshop will demonstrate best practices in using Microsoft Excel for research, as well as advanced functions and capabilities.
The content covered by this workshop will include:
Ever wanted to run bioinformatics tools from the command line but didn't know where to start? This hands-on workshop is designed for researchers with no programming background who want to harness the power of AI to make genomics analysis faster and less intimidating.
We will work through a complete beginner workflow in four practical steps. First, participants will be introduced to the Linux command line through Windows Subsystem for Linux 2 (WSL2) — no prior Linux experience required. Second, we will install NCBI BLAST+ locally, covering the essential steps of setting up a real bioinformatics tool in a Linux environment. Third, and most importantly, participants will learn how to use a freely available large language model (such as ChatGPT) to generate, explain, and troubleshoot Bash scripts for running BLAST searches — turning natural language questions into working code without writing a single line yourself. Finally, time permitting, we will demonstrate how an AI agent can go one step further: autonomously executing BLAST queries, parsing outputs, and returning results on your behalf.
By the end of this session, participants will be comfortable navigating a Linux terminal, will have a working BLAST installation, and will leave with a practical mental model for using AI as a coding assistant in their own research — applicable well beyond BLAST to any command-line bioinformatics tool.
All you need is a laptop with WSL2 installed. A setup guide will be circulated to registered participants before the session.
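As a preview of the "parsing outputs" step: BLAST's tabular format (`-outfmt 6`) is plain tab-separated text with twelve fixed columns, so a few lines of Python can summarise a results file. The hit values below are invented for illustration:

```python
# BLAST -outfmt 6 columns, in order:
# qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
# Parse one (invented) result line into a readable summary.

sample = "query1\tNC_000913.3\t98.75\t400\t5\t0\t1\t400\t1200\t1599\t1e-180\t650"

fields = sample.split("\t")
hit = {
    "query": fields[0],
    "subject": fields[1],
    "identity_pct": float(fields[2]),
    "align_length": int(fields[3]),
    "evalue": float(fields[10]),
}
print(hit["subject"], hit["identity_pct"])
```

This is also exactly the kind of snippet an LLM will produce from a plain-language request like "summarise my BLAST tabular output", so it doubles as a checkpoint for verifying AI-generated code.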
Background: The use of automation tools to assist in systematic review production is becoming more common. Most tools assist with 1) identifying and removing duplicate records, and 2) screening and selection of relevant studies. The Evidence Review Accelerator (TERA) supports all tasks in review production. TERA assists with designing the review question, searching for and deduplicating studies, selecting studies, data extraction, conducting multiple types of meta-analysis and writing the methods and results.
TERA is purpose-built by the two-week systematic review team (2weekSR) to enable full reviews to be completed in vastly reduced timeframes. We published an analysis of 10 of our reviews showing a median time to completion of 11 workdays.
Objectives: This workshop will show participants how to use TERA and how it fits into review workflows to improve the speed and quality of conducting systematic reviews.
Description: The workshop will comprise live demonstrations of TERA, conducted by the TERA developers and expert users of the tools. The live demonstration will be interspersed with hands-on tutorials in using the tools. Interactive feedback with the presenters will be encouraged, and sufficient time for this is incorporated in the design of the workshop. The expert skills of the presenters in both conducting reviews and using the tools are a key component of the workshop. All the tools in the workshop are free and available to be used via the Evidence Review Accelerator website: https://tera-tools.com/
Activities/Interaction Plans: The workshop will comprise the following: 1) a brief introduction; 2) an interactive component with the presenters and participants designing and writing a review using the Methods Wizard; 3) a demonstration of creating focused searches using the Word Frequency Analyser, SearchRefiner, Polyglot Search Translator and the Deduplicator; 4) a demonstration of Generative AI-powered screening using MechaScreener; 5) a demonstration of citation searching with SpiderCite; 6) interactive discussion and demonstration of the meta-analysis tools MetaInsight, MetaDTA and MetaWise; 7) using TERA Farmer to test whether all relevant studies have been found.
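To demystify what the meta-analysis tools compute: at the core of a fixed-effect meta-analysis is inverse-variance pooling, in which each study's effect size is weighted by the inverse of its variance. A sketch with invented effect sizes (the TERA tools handle this, plus random-effects and network models, interactively):

```python
# Fixed-effect meta-analysis by inverse-variance weighting.
# Effect sizes and standard errors below are invented for illustration.

studies = [
    {"effect": 0.30, "se": 0.10},
    {"effect": 0.50, "se": 0.20},
    {"effect": 0.20, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in studies]  # inverse of each variance
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3), round(pooled_se, 3))  # 0.303 0.077
```

Precise studies (small standard errors) dominate the pooled estimate, which is why the result sits close to the 0.30 study rather than the noisier 0.50 one.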
Most scientific presentations fail not because the research is weak, but because the story is buried.
This workshop teaches researchers how to transform results and complex science into compelling, logically structured presentations without sacrificing rigor. Participants will learn how to apply narrative principles to scientific talks—clarifying their core message, structuring results for maximum impact, and designing slides that support thinking rather than distract from it.
Through real examples and practical frameworks, attendees will leave with concrete tools to:
This is not a workshop on aesthetics. It is a workshop on thinking clearly—and making that thinking visible.
EcoCommons Australia offers a comprehensive suite of resources for ecological modelling, including an intuitive, user-friendly platform featuring thousands of trusted datasets and a range of expert-developed workflows for species distribution and community modelling.
This workshop will begin with a brief introduction to species distribution model (SDM) theory, followed by a guided tour of the EcoCommons platform and coding notebook workflows. There will also be a focus on selecting appropriate occurrence and environmental data for research aims and questions.
We will cover:
By the end of the workshop, attendees will understand how to:
This interactive workshop introduces a structured and data-driven approach to conducting literature reviews using Excel-based matrix outlining and Power BI dashboards. Participants will learn how to design and build a relational literature review database that supports systematic organisation, citation extraction, and writing alignment for academic research (e.g., theses, journal articles, and systematic reviews).
The session will cover:
By the end of the workshop, participants will have a reusable workflow to manage, analyse, and operationalise literature for research writing.
An introduction to Python that places an emphasis on working with and visualising tabular data.
Note that this workshop runs over 1.5 days, from 9:30 am on Tuesday 23rd, to 12:30 pm on Wednesday 24th. Please only book this workshop if you plan to attend it in its entirety.
An introduction to R that places an emphasis on making data analysis reproducible, using examples of data processing and visualisation.
Note that this workshop runs over 1.5 days, from 9:30 am on Tuesday 23rd, to 12:30 pm on Wednesday 24th. Please only book this workshop if you plan to attend it in its entirety.